Moral Judgement


AI-washing: The Asymmetric Effects of Its Two Types on Consumer Moral Judgments

Nyilasy, Greg, Gangadharbatla, Harsha

arXiv.org Artificial Intelligence

As AI hype continues to grow, organizations face pressure to broadcast or downplay purported AI initiatives, even when doing so is contrary to the truth. This paper introduces AI-washing as overstating (deceptive boasting) or understating (deceptive denial) a company's real AI usage. A 2x2 experiment (N = 401) examines how these false claims affect consumer attitudes and purchase intentions. Results reveal a pronounced asymmetry: deceptive denial evokes more negative moral judgments than honest negation, whereas deceptive boasting shows no comparable effects. We show that perceived betrayal mediates these outcomes. By clarifying how AI-washing erodes trust, the study highlights clear ethical implications for policymakers, marketers, and researchers striving for transparency.


M$^3$oralBench: A MultiModal Moral Benchmark for LVLMs

Yan, Bei, Zhang, Jie, Chen, Zhiyuan, Shan, Shiguang, Chen, Xilin

arXiv.org Artificial Intelligence

Recently, large foundation models, including large language models (LLMs) and large vision-language models (LVLMs), have become essential tools in critical fields such as law, finance, and healthcare. As these models increasingly integrate into our daily life, it is necessary to conduct moral evaluation to ensure that their outputs align with human values and remain within moral boundaries. Previous works primarily focus on LLMs, proposing moral datasets and benchmarks limited to text modality. However, given the rapid development of LVLMs, there is still a lack of multimodal moral evaluation methods. To bridge this gap, we introduce M$^3$oralBench, the first MultiModal Moral Benchmark for LVLMs. M$^3$oralBench expands the everyday moral scenarios in Moral Foundations Vignettes (MFVs) and employs the text-to-image diffusion model, SD3.0, to create corresponding scenario images. It conducts moral evaluation across six moral foundations of Moral Foundations Theory (MFT) and encompasses tasks in moral judgement, moral classification, and moral response, providing a comprehensive assessment of model performance in multimodal moral understanding and reasoning. Extensive experiments on 10 popular open-source and closed-source LVLMs demonstrate that M$^3$oralBench is a challenging benchmark, exposing notable moral limitations in current models. Our benchmark is publicly available.
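As a toy illustration of the per-foundation, multiple-choice scoring that such a moral-judgement benchmark implies, the sketch below computes accuracy grouped by moral foundation. The item schema and field names are hypothetical, invented for this example; they are not M$^3$oralBench's actual data format.

```python
# Toy per-foundation scorer for multiple-choice moral-judgement items.
# The item structure below is hypothetical, for illustration only.

def score_moral_judgement(items, predict):
    """Return per-foundation accuracy for a model's predictions."""
    correct, total = {}, {}
    for item in items:
        f = item["foundation"]  # e.g. "care", "fairness"
        total[f] = total.get(f, 0) + 1
        if predict(item["question"], item["choices"]) == item["answer"]:
            correct[f] = correct.get(f, 0) + 1
    return {f: correct.get(f, 0) / total[f] for f in total}

items = [
    {"foundation": "care", "question": "Is harming an animal for fun wrong?",
     "choices": ["yes", "no"], "answer": "yes"},
    {"foundation": "fairness", "question": "Is cutting in line acceptable?",
     "choices": ["yes", "no"], "answer": "no"},
]

# A trivial "model" that always answers "yes".
always_yes = lambda question, choices: "yes"
print(score_moral_judgement(items, always_yes))  # {'care': 1.0, 'fairness': 0.0}
```

Reporting accuracy per foundation rather than as a single aggregate is what lets a benchmark like this expose uneven moral coverage across the six MFT foundations.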


Decoding moral judgement from text: a pilot study

Gherman, Diana E., Zander, Thorsten O.

arXiv.org Artificial Intelligence

Moral judgement is a complex human reaction that engages both cognitive and emotional dimensions. While some of the neural correlates of morality are known, it is currently unclear whether moral violations can be detected at the single-trial level. In this pilot study, we explore the feasibility of decoding moral judgement from text stimuli with passive brain-computer interfaces. To elicit moral judgement effectively, we use video-audio affective priming prior to text stimulus presentation and attribute the text to moral agents. Our results show that further efforts are necessary to achieve reliable classification between moral congruency and incongruency states. We obtain good accuracy results for neutral vs. morally charged trials. With this research, we aim to pave the way towards neuroadaptive human-computer interaction and more human-compatible large language models (LLMs).


Values, Ethics, Morals? On the Use of Moral Concepts in NLP Research

Vida, Karina, Simon, Judith, Lauscher, Anne

arXiv.org Artificial Intelligence

With language technology increasingly affecting individuals' lives, many recent works have investigated the ethical aspects of NLP. Among other topics, researchers have focused on the notion of morality, investigating, for example, which moral judgements language models make. However, there has been little to no discussion of the terminology and the theories underpinning those efforts and their implications. This lack is highly problematic, as it hides the works' underlying assumptions and hinders a thorough and targeted scientific debate of morality in NLP. In this work, we address this research gap by (a) providing an overview of some important ethical concepts stemming from philosophy and (b) systematically surveying the existing literature on moral NLP w.r.t. its philosophical foundation, terminology, and data basis. For instance, we analyse what ethical theory an approach is based on, how this decision is justified, and what implications it entails. Our survey of 92 papers shows, for instance, that most papers neither provide a clear definition of the terms they use nor adhere to definitions from philosophy. Finally, (c) we give three recommendations for future research in the field. We hope our work will lead to a more informed, careful, and sound discussion of morality in language technology.


Artificial intelligence moral agent as Adam Smith's impartial spectator

Tomczak, Nikodem

arXiv.org Artificial Intelligence

Adam Smith developed a version of moral philosophy in which better decisions are made by interrogating an impartial spectator within us. We discuss the possibility of using an external, non-human-based substitute tool that would augment our internal mental processes and play the role of the impartial spectator. Such a tool would have more knowledge about the world, be more impartial, and provide a more encompassing perspective on moral assessment.


Artificial intelligence: ChatGPT statements can influence users' moral judgements

#artificialintelligence

Human responses to moral dilemmas can be influenced by statements written by the artificial intelligence chatbot ChatGPT, according to a study published in Scientific Reports. The findings indicate that users may underestimate the extent to which their own moral judgements can be influenced by the chatbot. Sebastian Krügel and colleagues asked ChatGPT (powered by the artificial intelligence language processing model Generative Pretrained Transformer 3) multiple times whether it is right to sacrifice the life of one person in order to save the lives of five others. They found that ChatGPT wrote statements arguing both for and against sacrificing one life, indicating that it is not biased towards a certain moral stance. The authors then presented 767 US participants, who were on average 39 years old, with one of two moral dilemmas that required them to choose whether to sacrifice one person's life to save five others.


Explainable Patterns for Distinction and Prediction of Moral Judgement on Reddit

Efstathiadis, Ion Stagkos, Paulino-Passos, Guilherme, Toni, Francesca

arXiv.org Artificial Intelligence

The r/AmITheAsshole forum on Reddit hosts discussions of moral issues based on concrete narratives presented by users. Existing analyses of the forum focus on its comments and do not make the underlying data publicly available. In this paper, we build a new dataset of comments and also investigate the classification of posts in the forum. Further, we identify textual patterns associated with the provocation of moral judgement by posts, with the expression of moral stance in comments, and with the decisions of trained classifiers of posts and comments.
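Comments on r/AmITheAsshole conventionally open with verdict acronyms (YTA, NTA, ESH, NAH), which makes a simple rule-based baseline for moral-stance labels easy to sketch. The code below is an illustrative keyword matcher under that convention, not the classifier from the paper:

```python
import re

# Standard r/AmITheAsshole verdict acronyms and their community meanings.
VERDICTS = {
    "YTA": "you're the asshole",
    "NTA": "not the asshole",
    "ESH": "everyone sucks here",
    "NAH": "no assholes here",
}

def extract_verdict(comment):
    """Return the first verdict acronym found in a comment, or None."""
    match = re.search(r"\b(YTA|NTA|ESH|NAH)\b", comment)
    return match.group(1) if match else None

comments = [
    "NTA, you were clearly provoked.",
    "Honestly, ESH in this situation.",
    "I'm not sure either way.",
]
print([extract_verdict(c) for c in comments])  # ['NTA', 'ESH', None]
```

A matcher like this can bootstrap weak labels for training a proper classifier, with the caveat that comments quoting or arguing against a verdict ("I disagree with the NTA votes") would be mislabeled.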


The AI oracle of Delphi uses the problems of Reddit to offer dubious moral advice

#artificialintelligence

Got a moral quandary you don't know how to solve? Why not turn to the wisdom of artificial intelligence, aka Ask Delphi: an intriguing research project from the Allen Institute for AI that offers answers to ethical dilemmas while demonstrating in wonderfully clear terms why we shouldn't trust software with questions of morality. Ask Delphi was launched on October 14th, along with a research paper describing how it was made. From a user's point of view, though, the system is beguilingly simple to use. Just head to the website, outline pretty much any situation you can think of, and Delphi will come up with a moral judgement. Since Ask Delphi launched, its nuggets of wisdom have gone viral in news stories and on social media.


Requisite Variety in Ethical Utility Functions for AI Value Alignment

Aliman, Nadisha-Marie, Kester, Leon

arXiv.org Artificial Intelligence

Value alignment, a complex subject of major importance in AI Safety research, has been studied from various perspectives in recent years. However, no final consensus on the design of ethical utility functions facilitating AI value alignment has been reached. Given the urgency of identifying systematic solutions, we postulate that it might be useful to start from a simple fact: for the utility function of an AI not to violate human ethical intuitions, it trivially has to be a model of these intuitions and reflect their variety. Since humans are biological organisms whose brains construct concepts such as moral judgements, the most accurate models of these intuitions are scientific ones. Thus, in order to better assess the variety of human morality, we perform a transdisciplinary analysis, applying a security mindset to the issue and summarizing variety-relevant background knowledge from neuroscience and psychology. We complement this information by linking it to augmented utilitarianism as a suitable ethical framework. On this basis, we propose first practical guidelines for the design of approximate ethical goal functions that might better capture the variety of human moral judgements. Finally, we conclude and address possible future challenges.


7 Skills That Aren't About to Be Automated

#artificialintelligence

Today's young professionals grew up in an age of mind-boggling technological change, seeing the growth of the internet, the invention of the smartphone, and the development of machine-learning systems. These advances all point toward the total automation of our lives, including the way we work and do business. It's no wonder, then, that young people are anxious about their ability to compete in the job market. As executives who have spent our lives assessing and implementing digital technology in every type of organization, we often get asked by them: "What should I learn today so that I'll have a job in the future?" In what follows, we'll share seven skills that not only resist automation but will keep you employable no matter what the future holds.